Speeding up Cylindrical Algebraic Decomposition by Gröbner Bases
Gröbner Bases and Cylindrical Algebraic Decomposition (CAD) are generally
thought of as two rather different methods for analysing systems of equations
and, in the case of CAD, inequalities. However, even for a mixed system of
equalities and inequalities, it is possible to apply Gröbner bases to the
(conjoined) equalities before invoking CAD. We find that this is, quite often
but not always, a beneficial preconditioning of the CAD problem.
It is also possible to precondition the (conjoined) inequalities with respect
to the equalities, and this can also be useful in many cases.

Comment: To appear in Proc. CICM 2012, LNCS 736
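A minimal sketch of this preconditioning, in Python with SymPy (the toy system
is invented for illustration; SymPy provides Gröbner bases but not CAD, so only
the preconditioning half is shown):

```python
from sympy import groebner, reduced, symbols

x, y = symbols('x y')

# Invented toy system: the conjoined equalities of a mixed system.
equalities = [x**2 + y**2 - 1, x*y - 1]

# Precondition: replace the equalities by a lex Groebner basis of the ideal
# they generate -- same solution set, but often a triangular shape that is
# cheaper for CAD to process.
gb = groebner(equalities, x, y, order='lex')
preconditioned = list(gb.exprs)
print(preconditioned)

# Sanity check: each original equality reduces to 0 modulo the basis,
# so both systems define the same variety.
for f in equalities:
    assert reduced(f, preconditioned, x, y, order='lex')[1] == 0
```

The preconditioned polynomials, not the original ones, would then be conjoined
with the inequalities and handed to a CAD implementation.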
Singular ways to search for the Higgs boson
The discovery or exclusion of the fundamental standard scalar is a hot topic,
given the data of LEP, the Tevatron and the LHC, as well as the advanced status
of the pertinent theoretical calculations. With the current statistics at the
hadron colliders, the workhorse decay channel, at all relevant H masses, is H
to WW, followed by W to light leptons. Using phase-space singularity
techniques, we construct and study a plethora of "singularity variables" meant
to facilitate the difficult tasks of separating signal and backgrounds and of
measuring the mass of a putative signal. The simplest singularity variables are
not invariant under boosts along the collider's axes and the simulation of
their distributions requires a good understanding of parton distribution
functions, perhaps not a serious shortcoming during the boson hunting season.
The derivation of longitudinally boost-invariant variables, which are functions
of the four charged-lepton observables that share this invariance, is quite
elaborate. But their use is simple and they are, in a kinematical sense,
optimal.

Comment: 19 pages, including 21 figures
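A standard example of a longitudinally boost-invariant quantity in the H to WW
dilepton channel is the transverse mass of the dilepton plus missing-energy
system (a textbook variable, not the paper's optimal construction); a minimal
sketch:

```python
import math

def transverse_mass(pt_ll, met, dphi):
    """Transverse mass of the dilepton + missing-ET system:
    mT = sqrt(2 * pT(ll) * MET * (1 - cos(dphi))).
    Built from transverse quantities only, so it is unchanged by boosts
    along the beam axis -- the invariance discussed in the abstract.
    (Textbook variable, not the paper's optimal construction.)"""
    return math.sqrt(2.0 * pt_ll * met * (1.0 - math.cos(dphi)))

# Back-to-back dilepton and MET, 40 GeV each: mT = sqrt(2*40*40*2) = 80 GeV.
print(transverse_mass(40.0, 40.0, math.pi))  # -> 80.0
```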
Capital allocation for credit portfolios with kernel estimators
Determining contributions by sub-portfolios or single exposures to
portfolio-wide economic capital for credit risk is an important risk
measurement task. Often economic capital is measured as Value-at-Risk (VaR) of
the portfolio loss distribution. For many of the credit portfolio risk models
used in practice, the VaR contributions then have to be estimated from Monte
Carlo samples. In the context of a partly continuous loss distribution (i.e.
continuous except for a positive point mass on zero), we investigate how to
combine kernel estimation methods with importance sampling to achieve more
efficient (i.e. less volatile) estimation of VaR contributions.

Comment: 22 pages, 12 tables, 1 figure, some amendments
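A generic sketch of the combination described here (a Nadaraya-Watson kernel
estimate of E[L_i | L = VaR] from a weighted Monte Carlo sample; the function
names, bandwidth rule and toy portfolio are invented, not the authors' exact
estimator):

```python
import numpy as np

def var_contributions(sub_losses, weights, alpha=0.99, bandwidth=None):
    """Kernel estimate of the VaR contributions E[L_i | L = VaR_alpha(L)].

    sub_losses : (n_scenarios, n_subportfolios) losses per Monte Carlo scenario
    weights    : importance-sampling likelihood ratios, one per scenario
                 (all ones recovers plain Monte Carlo)
    """
    total = sub_losses.sum(axis=1)
    # Weighted empirical quantile of the portfolio loss as the VaR estimate.
    order = np.argsort(total)
    cdf = np.cumsum(weights[order]) / weights.sum()
    var = total[order][np.searchsorted(cdf, alpha)]
    # Gaussian kernel weights centred at the VaR level (Silverman's rule as a
    # default bandwidth), combined with the importance-sampling weights.
    h = bandwidth or 1.06 * total.std() * len(total) ** -0.2
    k = weights * np.exp(-0.5 * ((total - var) / h) ** 2)
    return var, (k[:, None] * sub_losses).sum(axis=0) / k.sum()

# Toy portfolio of three independent exponential sub-portfolios, plain MC.
rng = np.random.default_rng(0)
sub = rng.exponential(1.0, size=(50_000, 3))
var, contrib = var_contributions(sub, np.ones(50_000))
print(var, contrib, contrib.sum())  # contributions approximately sum to VaR
```

By construction the kernel-weighted contributions sum to a kernel-weighted
average of the total loss near the VaR level, so they approximately add up to
the VaR, which is the full-allocation property one wants.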
On the Regularity Property of Differential Polynomials Modulo Regular Differential Chains
This paper provides an algorithm which computes the normal form of a rational differential fraction modulo a regular differential chain if, and only if, this normal form exists. A regularity test for polynomials modulo regular chains is revisited in the nondifferential setting and lifted to differential algebra. A new characterization of regular chains is provided.
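In the nondifferential setting, the core reduction behind such a normal form
can be sketched as iterated pseudo-division modulo a triangular chain (a toy
illustration with an invented chain; the paper's algorithm additionally handles
differential polynomials and decides regularity of initials, which this sketch
does not):

```python
from sympy import prem, symbols, expand

x, y = symbols('x y')

# Invented triangular chain: main variables y, then x.
chain = [(y, y**2 - x), (x, x**2 - 2)]

def reduce_mod_chain(f, chain):
    """Pseudo-remainder of f by each chain element in its main variable,
    highest variable first -- the basic reduction step modulo a chain."""
    for var, p in chain:
        f = prem(f, p, var)
    return expand(f)

r = reduce_mod_chain(y**3 + x*y + 1, chain)
print(r)  # degree < 2 in y and < 2 in x, as the chain requires
```

Because the initials of this toy chain are 1, pseudo-division already gives a
canonical reduced form; in general, normal forms also require inverting
initials, which is exactly where the regularity test enters.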
Improving NFS for the Discrete Logarithm Problem in Non-prime Finite Fields
The aim of this work is to investigate the hardness of the discrete logarithm problem in fields GF(p^n) where n is a small integer greater than 1. Though less studied than the small-characteristic case or the prime-field case, the difficulty of this problem is at the heart of security evaluations for torus-based and pairing-based cryptography. The best known method for solving this problem is the Number Field Sieve (NFS). A key ingredient in this algorithm is the ability to find good polynomials that define the extension fields used in NFS. We design two new methods for this task, modifying the asymptotic complexity and paving the way for record-breaking computations. We exemplify these results with the computation of discrete logarithms over a field GF(p^2) whose cardinality is 180 digits (595 bits) long.
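The complexities involved are stated in the standard subexponential L-notation;
a small Python helper makes the sizes concrete (the constant below is the
classical prime-field NFS one, quoted for scale only, not the complexity of the
methods in this abstract):

```python
import math

def L(Q, alpha, c):
    """Subexponential L-notation used for NFS complexities:
    L_Q(alpha, c) = exp(c * (ln Q)**alpha * (ln ln Q)**(1 - alpha))."""
    lnq = math.log(Q)
    return math.exp(c * lnq**alpha * math.log(lnq)**(1 - alpha))

# All NFS variants run in time L_Q(1/3, c); polynomial selection methods
# compete on the constant c. For scale, the classical prime-field constant:
c = (64 / 9) ** (1 / 3)      # ~1.923
Q = 2**595                   # size of the 595-bit field mentioned above
print(f"~2^{math.log2(L(Q, 1/3, c)):.0f} operations")
```

Shaving the constant c via better polynomial selection translates into a large
drop in this operation count, which is what makes record computations feasible.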
Computing Individual Discrete Logarithms Faster in GF(p^n) with the NFS-DL Algorithm
The Number Field Sieve (NFS) algorithm is the best known method to compute discrete logarithms (DL) in finite fields GF(p^n), with p medium to large and n small. This algorithm comprises four steps: polynomial selection, relation collection, linear algebra and, finally, individual logarithm computation. The first step outputs two polynomials defining two number fields, and a map from the polynomial ring over the integers modulo each of these polynomials to GF(p^n). After the relation collection and linear algebra phases, the (virtual) logarithm of a subset of elements in each number field is known. Given the target element in GF(p^n), the fourth step computes a preimage in one number field. If one can write the target preimage as a product of elements of known (virtual) logarithm, then one can deduce the discrete logarithm of the target. As recently shown by the Logjam attack, this final step can be critical when it can be computed very quickly. But we realized that computing an individual DL is much slower in medium- and large-characteristic non-prime fields GF(p^n) with n >= 3 than in prime fields GF(p) and quadratic fields GF(p^2). We optimize the first part of the individual DL computation, the "booting step", by dramatically reducing the size of the preimage norm. Its smoothness probability is then higher, so the running time of the booting step is much improved. Our method is very efficient for small extension fields with 2 <= n <= 6 and applies to any n > 1, in medium and large characteristic.
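The flavour of the booting step can be sketched with a toy model: randomise the
target until some representative of it has a smooth "norm", i.e. one that
factors entirely over small primes (the norm function, bounds and constants
below are invented stand-ins, not actual number-field norms):

```python
from sympy import factorint
import random

def is_smooth(n, bound):
    """True if every prime factor of n is at most bound."""
    return n > 0 and max(factorint(n)) <= bound

def booting_step(norm_of, bound, rng, max_tries=10_000):
    """Toy booting step: try random exponents e (think: target t <- g^e * t)
    until the representative's norm is bound-smooth; then the target's
    logarithm follows from known logarithms plus e."""
    for _ in range(max_tries):
        e = rng.randrange(1, 2**20)
        n = norm_of(e)
        if is_smooth(n, bound):
            return e, n
    raise RuntimeError("no smooth representative found")

rng = random.Random(1)
# Hypothetical stand-in for the preimage-norm of the randomised target.
norm_of = lambda e: (e * 0x9E3779B1 + 12345) % 2**40 + 2
e, n = booting_step(norm_of, bound=10_000, rng=rng)
print(e, sorted(factorint(n)))
```

The abstract's optimisation amounts to shrinking the norms this loop tests:
smaller norms are far more likely to be smooth, so far fewer trials are needed.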
New Complexity Trade-Offs for the (Multiple) Number Field Sieve Algorithm in Non-Prime Fields
The selection of polynomials to represent number fields crucially determines the efficiency of the Number Field Sieve
(NFS) algorithm for solving the discrete logarithm problem in a finite field. An important recent work due to Barbulescu et al. builds upon
existing works to propose two new methods for polynomial selection when the target field is a non-prime field. These methods are
called the generalised Joux-Lercier (GJL) and the Conjugation methods. In this work, we propose a new method (which we denote
as A) for polynomial selection for the NFS algorithm in fields GF(p^n), with p prime and n > 1.
The new method both subsumes and generalises the GJL and the Conjugation methods and provides new trade-offs for both n composite
and n prime. Let us denote the variant of the (multiple) NFS algorithm using the polynomial selection method "X" by (M)NFS-X.
Asymptotic analysis is performed for both the NFS-A and the MNFS-A algorithms.
In particular, in the medium-characteristic case p = L_Q(2/3, c_p), for c_p in a certain interval, the complexity of NFS-A is better than the complexities
of all previous algorithms, whether classical or MNFS. The MNFS-A algorithm provides lower complexity compared to the
NFS-A algorithm; for one range of c_p the complexity of MNFS-A
is the same as that of MNFS-Conjugation, and for another the complexity of MNFS-A
is lower than that of all previous methods.
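For background, the simplest instance of NFS polynomial selection is the
classical base-m method for prime fields, which already shows the game the GJL,
Conjugation and newer methods play for GF(p^n): produce two polynomials with a
common root modulo the characteristic, trading degree against coefficient size
(a sketch of the classical method, not the method A of this abstract; the toy N
is invented):

```python
import math

def base_m_selection(N, d):
    """Classical base-m polynomial selection for prime-field NFS: return
    (coefficients of f, m) with f(m) = N and deg f = d. The companion
    polynomial is g(x) = x - m, so f and g share the root m modulo N."""
    m = math.ceil(N ** (1 / (d + 1)))
    coeffs, n = [], N
    for _ in range(d + 1):      # digits of N in base m, low to high
        coeffs.append(n % m)
        n //= m
    assert n == 0               # d+1 digits suffice since m**(d+1) > N
    return coeffs, m

N = 1000003 * 2000003           # toy stand-in for the field characteristic
coeffs, m = base_m_selection(N, 3)
assert sum(c * m**i for i, c in enumerate(coeffs)) == N
print(coeffs, m)
```

Here the coefficients of f are bounded by roughly N^(1/(d+1)); the methods
compared in the abstract obtain better degree/coefficient trade-offs for
extension fields, which is what drives down the complexity constant.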